
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference



Abstract

The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha) that is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to the cross-domain data. Our model is equipped with intra-sentence gated-attention composition which helps achieve a better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
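The abstract does not spell out the intra-sentence gated-attention composition. As a rough illustration only (not the authors' exact formulation), one common way to pool RNN hidden states into a fixed-length sentence vector is to gate each state element-wise and then attend over the gated states; the weight matrix `W_gate` and attention vector `v_att` below are hypothetical learned parameters, and the random hidden states stand in for BiLSTM outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention_pool(H, W_gate, v_att):
    """Pool a sequence of hidden states H (T, d) into one fixed-length
    vector: gate each state element-wise, then attend over the gated states."""
    gates = 1.0 / (1.0 + np.exp(-(H @ W_gate)))  # (T, d) sigmoid gates
    gated = gates * H                            # element-wise gated states
    scores = gated @ v_att                       # (T,) attention scores
    alpha = softmax(scores)                      # attention weights, sum to 1
    return alpha @ gated                         # (d,) sentence vector

T, d = 7, 16                       # toy sequence length and hidden size
H = rng.standard_normal((T, d))    # stand-in for BiLSTM outputs
W_gate = 0.1 * rng.standard_normal((d, d))
v_att = rng.standard_normal(d)

sentence_vec = gated_attention_pool(H, W_gate, v_att)
print(sentence_vec.shape)  # (16,)
```

In the shared-task setting, a vector like this would be produced independently for premise and hypothesis (no cross-sentence attention) and the pair fed to a classifier for the inference label.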
